
    Deep Attributes Driven Multi-Camera Person Re-identification

    Full text link
    The visual appearance of a person is easily affected by factors such as pose variations, viewpoint changes and camera parameter differences, which makes person Re-Identification (ReID) across multiple cameras a very challenging task. This work aims to learn mid-level human attributes that are robust to such visual appearance variations. We propose a semi-supervised attribute learning framework which progressively boosts the accuracy of attributes using only a limited amount of labeled data. Specifically, the framework involves three training stages. A deep Convolutional Neural Network (dCNN) is first trained on an independent dataset labeled with attributes. It is then fine-tuned on another dataset labeled only with person IDs using our triplet loss. Finally, the updated dCNN predicts attribute labels for the target dataset, which is combined with the independent dataset for a final round of fine-tuning. The predicted attributes, namely deep attributes, exhibit superior generalization ability across different datasets. By directly using the deep attributes with a simple cosine distance, we obtain surprisingly good accuracy on four person ReID datasets. Experiments also show that a simple metric learning module further boosts our method, making it significantly outperform many recent works. Comment: Person Re-identification; 17 pages; 5 figures; In IEEE ECCV 201
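
    The matching step described above is deliberately simple: once the fine-tuned dCNN has predicted an attribute vector per image, ranking the gallery reduces to a cosine comparison. Below is a minimal sketch of that retrieval step, assuming attribute vectors have already been extracted; the 105-dimensional size, the random data and the function names are illustrative, not taken from the paper.

        import numpy as np

        def cosine_rank(query_attr, gallery_attrs):
            """Rank gallery entries by cosine similarity to a query attribute vector."""
            q = query_attr / np.linalg.norm(query_attr)
            g = gallery_attrs / np.linalg.norm(gallery_attrs, axis=1, keepdims=True)
            similarity = g @ q                      # cosine similarity per gallery entry
            return np.argsort(-similarity)          # gallery indices, best matches first

        # Hypothetical attribute vectors (dimensionality is an assumption, not the paper's).
        rng = np.random.default_rng(0)
        query = rng.random(105)
        gallery = rng.random((500, 105))
        ranking = cosine_rank(query, gallery)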

    Investigating Open-World Person Re-identification Using a Drone

    Get PDF
    Person re-identification is now one of the most topical and intensively studied problems in computer vision due to its challenging nature and its critical role in underpinning many multi-camera surveillance tasks. A fundamental assumption in almost all existing re-identification research is that cameras are in fixed emplacements, allowing the explicit modelling of camera and inter-camera properties in order to improve re-identification. In this paper, we present an introductory study pushing re-identification in a different direction: re-identification on a mobile platform, such as a drone. We formalise some variants of the standard formulation for re-identification that are more relevant for mobile re-identification. We introduce the first dataset for mobile re-identification, and we use this to elucidate the unique challenges of mobile re-identification. Finally, we re-evaluate some conventional wisdom about re-id models in the light of these challenges and suggest future avenues for research in this area.

    Real-time Person Re-identification at the Edge: A Mixed Precision Approach

    Full text link
    A critical part of multi-person multi-camera tracking is the person re-identification (re-ID) algorithm, which recognizes and retains the identities of all detected unknown people throughout the video stream. Many re-ID algorithms today achieve state-of-the-art results, but little work has been done to explore the deployment of such algorithms in computation- and power-constrained real-time scenarios. In this paper, we study the effect of using a light-weight model, MobileNet-v2, for re-ID and investigate the impact of single (FP32) precision versus half (FP16) precision for training on the server and inference on the edge nodes. We further compare the results with a baseline model that uses ResNet-50 on state-of-the-art benchmarks including CUHK03, Market-1501, and DukeMTMC. The MobileNet-v2 mixed precision training method improves inference throughput on the edge node by 3.25×, reaching 27.77 fps, and training time on the server by 1.75×, and decreases power consumption on the edge node by 1.45×, while degrading accuracy by only 5.6% on average across the three datasets with respect to single-precision ResNet-50. The code and pre-trained networks are publicly available at https://github.com/TeCSAR-UNCC/person-reid. Comment: This is a pre-print of an article published in International Conference on Image Analysis and Recognition (ICIAR 2019), Lecture Notes in Computer Science. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-27272-2_
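
    As a rough illustration of the single- vs half-precision comparison the abstract describes, the sketch below casts a MobileNet-v2 backbone to FP16 for edge inference and runs one mixed-precision training step with torch.cuda.amp on the server side. It assumes PyTorch, torchvision and a CUDA device; the classifier size, crop size, loss and optimiser are placeholders rather than the authors' exact setup (their code is at the GitHub link above).

        import torch
        import torchvision

        # FP16 inference on the edge node (a CUDA device is assumed).
        model = torchvision.models.mobilenet_v2(num_classes=751)    # 751 IDs, as in Market-1501
        model = model.half().cuda().eval()
        images = torch.randn(8, 3, 256, 128).half().cuda()          # typical re-ID crop size
        with torch.no_grad():
            logits_fp16 = model(images)

        # Mixed-precision training step on the server via automatic mixed precision.
        model_fp32 = torchvision.models.mobilenet_v2(num_classes=751).cuda().train()
        optimizer = torch.optim.SGD(model_fp32.parameters(), lr=0.01)
        scaler = torch.cuda.amp.GradScaler()
        labels = torch.randint(0, 751, (8,)).cuda()
        with torch.cuda.amp.autocast():
            logits = model_fp32(images.float())
            loss = torch.nn.functional.cross_entropy(logits, labels)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()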

    Person Re-identification Using Clustering Ensemble Prototypes

    Full text link
    This paper presents an appearance-based model to deal with the person re-identification problem. In a crowded scene it is commonly observed that the appearances of most people are similar in terms of their combination of attire. In such situations it is difficult to distinguish an individual from a group of similar-looking individuals, which introduces ambiguity in recognition for re-identification. Properly organising individuals by their appearance characteristics makes it possible to recognise a target individual by comparing it against a particular group of similar-looking individuals, so grouping individuals according to their appearance is a crucial task for person re-identification. In this work we focus on an unsupervised clustering ensemble approach for discovering prototypes, where each prototype represents a set of similar gallery image instances. The formation of each prototype depends on the appearance characteristics of the gallery instances. A k-NN classifier is employed to assign a prototype to a given probe image. The similarity computation is then performed only between the probe and the subset of gallery images that share its prototype, which reduces the number of comparisons. Re-identification performance on benchmark datasets is presented using cumulative matching characteristic (CMC) curves.
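
    The prototype-then-match pipeline can be sketched compactly. In the snippet below, plain KMeans stands in for the paper's unsupervised clustering ensemble and Euclidean distance is used for the final comparison, so the specific choices (feature dimension, number of prototypes, distance, library) are assumptions for illustration only.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neighbors import KNeighborsClassifier

        def build_prototypes(gallery_feats, n_prototypes=10, k=5):
            """Group gallery features into appearance prototypes and fit a k-NN assigner.
            KMeans is a stand-in for the paper's clustering ensemble."""
            proto_labels = KMeans(n_clusters=n_prototypes, n_init=10).fit_predict(gallery_feats)
            assigner = KNeighborsClassifier(n_neighbors=k).fit(gallery_feats, proto_labels)
            return proto_labels, assigner

        def match_probe(probe_feat, gallery_feats, proto_labels, assigner):
            """Compare the probe only against gallery images sharing its prototype."""
            proto = assigner.predict(probe_feat[None])[0]
            candidates = np.flatnonzero(proto_labels == proto)
            dists = np.linalg.norm(gallery_feats[candidates] - probe_feat, axis=1)
            return candidates[np.argsort(dists)]        # gallery indices, best first

        rng = np.random.default_rng(0)
        gallery = rng.random((300, 128))                # hypothetical appearance features
        proto_labels, assigner = build_prototypes(gallery)
        ranking = match_probe(rng.random(128), gallery, proto_labels, assigner)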

    Temporal Model Adaptation for Person Re-Identification

    Full text link
    Person re-identification is an open and challenging problem in computer vision. The majority of efforts have been spent either on designing the best feature representation or on learning the optimal matching metric, and most approaches have neglected the problem of adapting the selected features or the learned model over time. To address this problem, we propose a temporal model adaptation scheme with a human in the loop. We first introduce a similarity-dissimilarity learning method which can be trained incrementally by means of a stochastic alternating direction method of multipliers (ADMM) optimization procedure. Then, to achieve temporal adaptation with limited human effort, we exploit a graph-based approach to present the user with only the most informative probe-gallery matches that should be used to update the model. Results on three datasets show that our approach performs on par with or even better than state-of-the-art approaches while reducing the manual pairwise labeling effort by about 80%.
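
    A very rough sketch of the human-in-the-loop adaptation loop follows. A plain diagonal-metric gradient update stands in for the paper's stochastic ADMM similarity-dissimilarity learner, and a simple ambiguity heuristic stands in for its graph-based pair selection; all of these substitutions, and every dimension and parameter, are assumptions made only to illustrate the loop structure.

        import numpy as np

        def pair_dist(w, x, y):
            """Squared distance under a learned non-negative diagonal metric w."""
            return np.sum(w * (x - y) ** 2, axis=-1)

        def adapt(w, probes, gallery, oracle, budget=10, lr=0.01, margin=1.0):
            """Ask a human (oracle) about the most ambiguous probe-gallery pairs,
            then take one hinge-style gradient step per labelled pair."""
            d = pair_dist(w, probes[:, None, :], gallery[None, :, :])
            # Ambiguity heuristic: pick pairs whose distance is closest to the margin.
            flat = np.argsort(np.abs(d - margin), axis=None)[:budget]
            for idx in flat:
                i, j = np.unravel_index(idx, d.shape)
                same = oracle(i, j)                     # human says: same identity?
                sign = 1.0 if same else -1.0            # pull together / push apart
                grad = sign * (probes[i] - gallery[j]) ** 2
                w = np.maximum(w - lr * grad, 0.0)      # keep the metric non-negative
            return w

        rng = np.random.default_rng(0)
        probes, gallery = rng.random((20, 64)), rng.random((50, 64))
        w = adapt(np.ones(64), probes, gallery, oracle=lambda i, j: rng.random() < 0.1)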

    Unsupervised Person Re-identification by Deep Learning Tracklet Association

    Get PDF
    © 2018, Springer Nature Switzerland AG. Most existing person re-identification (re-id) methods rely on supervised model learning from per-camera-pair, manually labelled pairwise training data. This leads to poor scalability in practical re-id deployment due to the lack of exhaustive identity labelling of positive and negative image pairs for every camera pair. In this work, we address this problem by proposing an unsupervised re-id deep learning approach capable of incrementally discovering and exploiting the underlying re-id discriminative information from automatically generated person tracklet data from videos in an end-to-end model optimisation. We formulate a Tracklet Association Unsupervised Deep Learning (TAUDL) framework characterised by jointly learning per-camera (within-camera) tracklet association (labelling) and cross-camera tracklet correlation by maximising the discovery of the most likely tracklet relationships across camera views. Extensive experiments demonstrate the superiority of the proposed TAUDL model over state-of-the-art unsupervised and domain adaptation re-id methods on six person re-id benchmark datasets.
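
    As a toy illustration of the "per-camera tracklet labels as free supervision" idea, the sketch below treats each tracklet within a camera as its own pseudo-class and trains one classifier head per camera on a shared embedding. The cross-camera correlation term of TAUDL is omitted, and the feature source, layer sizes and tracklet counts are all assumptions, not the paper's architecture.

        import torch
        import torch.nn as nn

        class PerCameraTrackletModel(nn.Module):
            """Shared embedding with one softmax head per camera; each camera's
            tracklet indices act as free pseudo-labels (cross-camera term omitted)."""
            def __init__(self, feat_dim=2048, embed_dim=256, tracklets_per_cam=(100, 120)):
                super().__init__()
                self.embed = nn.Linear(feat_dim, embed_dim)
                self.heads = nn.ModuleList(nn.Linear(embed_dim, n) for n in tracklets_per_cam)

            def forward(self, feats, cam_id):
                return self.heads[cam_id](torch.relu(self.embed(feats)))

        model = PerCameraTrackletModel()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # One toy step: features from camera 0, labelled by their tracklet index.
        feats = torch.randn(32, 2048)
        tracklet_ids = torch.randint(0, 100, (32,))
        loss = criterion(model(feats, cam_id=0), tracklet_ids)
        loss.backward()
        optimizer.step()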

    Person re-identification with soft biometrics through deep learning

    Get PDF
    Re-identification of persons is usually based on primary biometric features such as face, fingerprints, iris or gait. However, in most existing video surveillance systems it is difficult to obtain these features due to the low resolution of surveillance footage and unconstrained real-world environments. As a result, most existing person re-identification techniques focus only on overall visual appearance. Recently, the use of soft biometrics has been proposed to improve the performance of person re-identification. Soft biometrics such as height, gender and age are physical or behavioural traits that can be described by humans and can be obtained from low-resolution videos at a distance, which makes them ideal for person re-identification applications. In addition, because soft biometrics describe an individual with human-understandable labels, they allow verbal descriptions to be used in person re-identification or person retrieval systems. In some deep learning based person re-identification methods, soft biometric attributes are integrated into the network to boost the robustness of the feature representation. Soft biometrics can also be utilised as a domain adaptation bridge for addressing the cross-dataset person re-identification problem. This chapter reviews state-of-the-art deep learning methods involving soft biometrics from three perspectives: supervised, semi-supervised and unsupervised approaches. Finally, we discuss existing issues that are not addressed by current works.
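
    Integrating soft-biometric attributes into the network, as mentioned above, typically amounts to multi-task learning: a shared feature feeds one head for identity and additional heads for attributes such as gender or age group. The sketch below shows that generic pattern rather than any specific method from the chapter; the backbone feature, layer sizes, identity count and attribute vocabulary are all assumptions.

        import torch
        import torch.nn as nn

        class SoftBiometricReID(nn.Module):
            """Shared embedding with an identity head plus soft-biometric heads."""
            def __init__(self, feat_dim=2048, embed_dim=512, num_ids=751):
                super().__init__()
                self.embed = nn.Sequential(nn.Linear(feat_dim, embed_dim), nn.ReLU())
                self.id_head = nn.Linear(embed_dim, num_ids)
                self.gender_head = nn.Linear(embed_dim, 2)       # e.g. male / female
                self.age_head = nn.Linear(embed_dim, 4)          # e.g. child / young / adult / senior

            def forward(self, feats):
                e = self.embed(feats)
                return self.id_head(e), self.gender_head(e), self.age_head(e)

        model = SoftBiometricReID()
        feats = torch.randn(16, 2048)                            # backbone features (assumed)
        id_logits, gender_logits, age_logits = model(feats)
        # Joint loss: identity plus attribute terms (random labels for illustration only).
        loss = (nn.functional.cross_entropy(id_logits, torch.randint(0, 751, (16,)))
                + nn.functional.cross_entropy(gender_logits, torch.randint(0, 2, (16,)))
                + nn.functional.cross_entropy(age_logits, torch.randint(0, 4, (16,))))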

    Region-Based Interactive Ranking Optimization for Person Re-identification

    No full text